Deep Neural Network with l2-norm Unit for Brain Lesions Detection
Automated brain lesion detection is an important and very challenging
clinical diagnostic task because lesions differ in size, shape, contrast,
and location. Deep learning has recently shown promising progress in many
application fields, which motivates us to apply this technology to such an
important problem. In this paper, we propose a novel, end-to-end trainable
approach for brain lesion classification and detection using a deep
convolutional neural network (CNN). To investigate its applicability, we
applied our approach to several brain diseases, including high- and low-grade
glioma, ischemic stroke, and Alzheimer's disease, using brain magnetic
resonance images (MRI) as input for the analysis. We propose a new operating
unit that receives features from several projections of a subset of units in
the bottom layer and computes a normalized l2-norm for the next layer. We
evaluated the proposed approach on two different CNN architectures and a
number of popular benchmark datasets. The experimental results demonstrate
the superior ability of the proposed approach.
Comment: Accepted for presentation in ICONIP-201
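As a rough illustration of the operating unit described in the abstract, the sketch below computes a normalized l2-norm over several linear projections of a subset of bottom-layer units. The projection matrix, the choice of subset, and the normalization by the number of projections are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def l2_norm_unit(x, W, eps=1e-8):
    """Hypothetical sketch of an l2-norm operating unit.

    x : (k,) features from a subset of bottom-layer units.
    W : (m, k) matrix giving m linear projections of that subset.
    Returns a single normalized l2-norm passed to the next layer.
    """
    z = W @ x                                # several projections of the subset
    norm = np.sqrt(np.sum(z ** 2))           # l2-norm of the projections
    return norm / (np.sqrt(len(z)) + eps)    # normalize by sqrt(m) (assumed)

# toy usage with an identity projection
x = np.array([1.0, -2.0, 0.5])
W = np.eye(3)
out = l2_norm_unit(x, W)
```

In a real network `W` would be learned and the unit applied per spatial location; here it only shows the shape of the computation.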
Expected exponential loss for gaze-based video and volume ground truth annotation
Many recent machine learning approaches used in medical imaging rely heavily
on large amounts of image and ground truth data. In the context of object
segmentation, pixel-wise annotations are extremely expensive to collect,
especially in video and 3D volumes. To reduce this annotation burden, we
propose a novel framework that lets annotators simply observe the object to
segment and records where they looked with a \$200 eye gaze tracker. Our
method then estimates pixel-wise probabilities for the presence of the object
throughout the sequence, from which we train a classifier in a semi-supervised
setting using a novel expected exponential loss function. We show that our
framework provides superior performance over existing strategies on a wide
range of medical image settings, and that our method can also be combined
with current crowd-sourcing paradigms.
Comment: 9 pages, 5 figures, MICCAI 2017 - LABELS Workshop
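The expected exponential loss can be sketched as follows, under the assumption of binary labels y in {-1, +1}: the usual exponential loss exp(-y f(x)) is averaged over a per-pixel label distribution, here the gaze-derived probability that a pixel belongs to the object. This is a generic reconstruction, not the paper's exact loss:

```python
import numpy as np

def expected_exponential_loss(scores, p_pos):
    """Expected exponential loss under label uncertainty (assumed form).

    scores : classifier outputs f(x), one per pixel/sample.
    p_pos  : estimated probability that the true label is +1
             (e.g. derived from gaze fixations).
    """
    scores = np.asarray(scores, dtype=float)
    p_pos = np.asarray(p_pos, dtype=float)
    # E_y[exp(-y f)] = p * exp(-f) + (1 - p) * exp(+f)
    return np.mean(p_pos * np.exp(-scores) + (1.0 - p_pos) * np.exp(scores))

# a confident, mostly-correct prediction vs. a confidently wrong one
loss_ok = expected_exponential_loss([2.0, -1.0], [0.9, 0.2])
loss_bad = expected_exponential_loss([-2.0, 1.0], [0.9, 0.2])
```

When the label probability is exactly 0 or 1 this reduces to the standard exponential loss, which is why it suits a semi-supervised setting with soft labels.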
A Dynamic Programming Solution to Bounded Dejittering Problems
We propose a dynamic programming solution to image dejittering problems with
bounded displacements and obtain efficient algorithms for the removal of line
jitter, line pixel jitter, and pixel jitter.
Comment: The final publication is available at link.springer.co
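For the line-jitter case, a dynamic program over bounded per-row displacements can be sketched as below. The cost model (SSD between consecutive rows plus a displacement smoothness term), the circular shift, and the pinning of the first row are assumptions for illustration, not the paper's formulation:

```python
import numpy as np

def dejitter_lines(img, max_shift=2, lam=0.1):
    """DP over per-row horizontal shifts d in [-max_shift, max_shift].

    Data term: SSD between consecutive rows after shifting; smoothness
    term lam * |d_i - d_{i-1}|. The first row is pinned at shift 0.
    Returns one recovered shift per row.
    """
    H, _ = img.shape
    shifts = np.arange(-max_shift, max_shift + 1)
    S = len(shifts)
    shifted = lambda row, d: np.roll(row, d)   # circular shift (simplification)

    cost = np.full((H, S), np.inf)
    back = np.zeros((H, S), dtype=int)
    cost[0, max_shift] = 0.0                   # index max_shift <=> shift 0
    for i in range(1, H):
        for j, d in enumerate(shifts):
            r = shifted(img[i], d)
            for k, dp in enumerate(shifts):
                data = np.sum((r - shifted(img[i - 1], dp)) ** 2)
                c = cost[i - 1, k] + data + lam * abs(d - dp)
                if c < cost[i, j]:
                    cost[i, j] = c
                    back[i, j] = k
    # backtrack the optimal shift sequence
    j = int(np.argmin(cost[-1]))
    out = [0] * H
    for i in range(H - 1, -1, -1):
        out[i] = int(shifts[j])
        j = back[i, j]
    return out

# toy example: identical rows, with row 2 jittered one pixel to the right
base = np.tile(np.array([0., 0., 1., 1., 0., 0., 0., 0.]), (4, 1))
img = base.copy()
img[2] = np.roll(base[2], 1)
found = dejitter_lines(img)
```

The recovered shift for row 2 is -1, i.e. the correction that undoes the jitter; the bounded displacement keeps the state space, and hence the DP, small.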
HeMIS: Hetero-Modal Image Segmentation
We introduce a deep learning image segmentation framework that is extremely
robust to missing imaging modalities. Instead of attempting to impute or
synthesize missing data, the proposed approach learns, for each modality, an
embedding of the input image into a single latent vector space for which
arithmetic operations (such as taking the mean) are well defined. Points in
that space, which are averaged over modalities available at inference time, can
then be further processed to yield the desired segmentation. As such, any
combinatorial subset of available modalities can be provided as input, without
having to learn a combinatorial number of imputation models. Evaluated on two
neurological MRI datasets (brain tumors and MS lesions), the approach yields
state-of-the-art segmentation results when provided with all modalities;
moreover, its performance degrades remarkably gracefully when modalities are
removed, significantly more so than alternative mean-filling or other synthesis
approaches.
Comment: Accepted as an oral presentation at MICCAI 201
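The fusion step described above, i.e. computing statistics over whichever modality embeddings are present, can be sketched as follows. The use of mean and variance as the fused representation follows the abstract's description of well-defined arithmetic in the latent space; the exact feature layout is an assumption:

```python
import numpy as np

def hemis_fuse(embeddings):
    """Hetero-modal fusion over the *available* modality embeddings.

    embeddings : list of (C,) latent vectors, one per available modality.
    First and second moments are computed across modalities, so any subset
    can be provided at inference time without retraining.
    """
    E = np.stack(embeddings)                     # (M, C), M = modalities present
    mean = E.mean(axis=0)
    var = E.var(axis=0) if len(E) > 1 else np.zeros_like(mean)
    return np.concatenate([mean, var])           # fed to the segmentation head

# fusing two modalities vs. a single one yields the same feature size
fused_two = hemis_fuse([np.array([1., 3.]), np.array([3., 1.])])
fused_one = hemis_fuse([np.array([1., 3.])])
```

Because the output dimensionality does not depend on how many modalities were supplied, the downstream network never needs a combinatorial number of imputation models.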
Measurement of the reaction \gamma p \to K^+\Lambda(1520) at photon energies up to 2.65 GeV
The reaction \gamma p \to K^+\Lambda(1520) was measured in the energy range
from threshold to 2.65 GeV with the SAPHIR detector at the electron stretcher
facility ELSA in Bonn. The production cross section was
analyzed in the decay modes , , , and
as a function of the photon energy and the squared
four-momentum transfer t. While the cross sections for the inclusive
reactions rise steadily with energy, the cross section of the process \gamma p
\to K^+\Lambda(1520) peaks at a photon energy of about 2.0 GeV, falls off
exponentially with t, and shows a slope flattening with increasing photon
energy. The angular distributions in the t-channel helicity system indicate
neither a nor a exchange dominance. The interpretation of the
as a molecule is not supported.
Comment: 11 pages, 16 figures, 4 tables
Segmentation of image ensembles via latent atlases
Spatial priors, such as probabilistic atlases, play an important role in MRI segmentation. However, the availability of comprehensive, reliable and suitable manual segmentations for atlas construction is limited. We therefore propose a method for joint segmentation of corresponding regions of interest in a collection of aligned images that does not require labeled training data. Instead, a latent atlas, initialized by at most a single manual segmentation, is inferred from the evolving segmentations of the ensemble. The algorithm is based on probabilistic principles but is solved using partial differential equations (PDEs) and energy minimization criteria. We evaluate the method on two datasets, segmenting subcortical and cortical structures in a multi-subject study and extracting brain tumors in a single-subject multi-modal longitudinal experiment. We compare the segmentation results to manual segmentations, when those exist, and to the results of a state-of-the-art atlas-based segmentation method. The quality of the results supports the latent atlas as a promising alternative when existing atlases are not compatible with the images to be segmented.
Funding: National Institutes of Health (U.S.) (National Institute for Biomedical Imaging and Bioengineering (U.S.)/National Alliance for Medical Image Computing (U.S.) U54-EB005149); National Institutes of Health (U.S.) (National Center for Research Resources (U.S.)/Neuroimaging Analysis Center (U.S.) P41-RR13218); National Institutes of Health (U.S.) (National Institute of Neurological Disorders and Stroke (U.S.) R01-NS051826); National Institutes of Health (U.S.) (National Center for Research Resources (U.S.)/Biomedical Informatics Research Network U24-RR021382); National Science Foundation (U.S.) (CAREER Award 0642971); German Academy of Sciences Leopoldina (Fellowship LPDS 2009-10); Academy of Finland (Grant 133611
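The alternation between the evolving segmentations and the latent atlas can be sketched in a toy form as below. This is a deliberately simplified EM-style version with scalar intensities, two labels, and no PDE/level-set machinery; the Gaussian-like likelihood and the sharpness parameter `beta` are assumptions:

```python
import numpy as np

def latent_atlas_segment(images, init_seg, n_iter=10, beta=4.0):
    """Toy joint segmentation with a latent atlas (assumed simplified model).

    images   : (N, H, W) aligned scalar images.
    init_seg : (H, W) binary initialization (the single manual segmentation).
    Alternates between (1) per-image foreground posteriors given the current
    atlas and per-image intensity means, and (2) re-estimating the latent
    atlas as the average of the evolving soft segmentations.
    """
    segs = np.repeat(init_seg[None].astype(float), len(images), axis=0)
    atlas = np.clip(segs.mean(axis=0), 0.05, 0.95)   # keep the prior non-degenerate
    for _ in range(n_iter):
        for i, img in enumerate(images):
            fg = segs[i] > 0.5
            mu1, mu0 = img[fg].mean(), img[~fg].mean()
            l1 = np.exp(-beta * (img - mu1) ** 2)    # foreground likelihood
            l0 = np.exp(-beta * (img - mu0) ** 2)    # background likelihood
            segs[i] = atlas * l1 / (atlas * l1 + (1 - atlas) * l0 + 1e-12)
        atlas = np.clip(segs.mean(axis=0), 0.05, 0.95)
    return segs, atlas

# two identical images with a bright 4x4 square; init misses its last column
img = np.zeros((8, 8)); img[2:6, 2:6] = 1.0
init = np.zeros((8, 8)); init[2:6, 2:5] = 1.0
segs, atlas = latent_atlas_segment(np.stack([img, img]), init)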
Joint Optical Flow and Temporally Consistent Semantic Segmentation
The importance and demands of visual scene understanding have been steadily
increasing along with the active development of autonomous systems.
Consequently, there has been a large amount of research dedicated to semantic
segmentation and dense motion estimation. In this paper, we propose a method
for jointly estimating optical flow and temporally consistent semantic
segmentation, which closely connects these two problem domains so that each
leverages the other. Semantic segmentation provides information on plausible
physical motion for its associated pixels, and accurate pixel-level temporal
correspondences enhance the accuracy of semantic segmentation in the temporal
domain. We demonstrate the benefits of our approach on the KITTI benchmark,
where we observe performance gains for both flow and segmentation. We achieve
state-of-the-art optical flow results, and outperform all published algorithms
by a large margin on challenging but crucial dynamic objects.
Comment: 14 pages, Accepted for CVRSUAD workshop at ECCV 201
AIFNet: Automatic Vascular Function Estimation for Perfusion Analysis Using Deep Learning
Perfusion imaging is crucial in acute ischemic stroke for quantifying the
salvageable penumbra and irreversibly damaged core lesions. As such, it helps
clinicians to decide on the optimal reperfusion treatment. In perfusion CT
imaging, deconvolution methods are used to obtain clinically interpretable
perfusion parameters that allow identifying brain tissue abnormalities.
Deconvolution methods require the selection of two reference vascular functions
as inputs to the model: the arterial input function (AIF) and the venous output
function, with the AIF as the most critical model input. When manually
performed, vascular function selection is time-consuming, suffers from poor
reproducibility, and depends on the professional's experience. This leads to
potentially unreliable quantification of the penumbra and core lesions and,
hence, might harm the treatment decision process. In this work, we automate
the perfusion analysis with AIFNet, a fully automatic, end-to-end trainable
deep learning approach for estimating the vascular functions. Unlike previous
methods that use clustering or segmentation techniques to select vascular
voxels, AIFNet is directly optimized for vascular function estimation, which
allows it to better recognise the time-curve profiles. Validation on the
public ISLES18 stroke database shows that AIFNet reaches inter-rater
performance for the vascular function estimation and, subsequently, for the
parameter maps and core lesion quantification obtained through deconvolution.
We conclude that AIFNet has potential for clinical transfer and could be
incorporated in perfusion deconvolution software.
Comment: Preprint submitted to Elsevier
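To show where the estimated AIF enters the pipeline, the sketch below implements standard truncated-SVD perfusion deconvolution (this is the classical downstream step, not AIFNet itself; the truncation threshold `lam` is an assumption). The tissue curve is modeled as the AIF convolved with the scaled residue function, and the convolution matrix is inverted via a truncated SVD:

```python
import numpy as np

def svd_deconvolution(tissue_curve, aif, dt=1.0, lam=0.2):
    """Truncated-SVD deconvolution: solve A k = c for k = CBF * residue(t).

    A is the lower-triangular Toeplitz convolution matrix built from the AIF;
    singular values below lam * s_max are discarded for noise robustness.
    """
    n = len(aif)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, : i + 1] = aif[i::-1]          # discrete convolution, causal
    A *= dt
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.zeros_like(s)
    keep = s > lam * s.max()                 # truncate small singular values
    s_inv[keep] = 1.0 / s[keep]
    k = Vt.T @ (s_inv * (U.T @ tissue_curve))
    return k, k.max()                        # deconvolved curve, CBF estimate

# toy check: an impulse-like AIF delayed by one sample
aif = np.array([0., 1., 0., 0., 0.])
k_true = np.array([2., 1., 0.5, 0., 0.])
tissue = np.convolve(aif, k_true)[:5]        # forward model C = A k
k_est, cbf = svd_deconvolution(tissue, aif)
```

With a clean impulse AIF the deconvolution recovers the residue curve exactly; with a noisy, manually selected AIF the result degrades, which is the motivation the abstract gives for estimating the AIF automatically.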
Automatic Brain Tumor Segmentation using Convolutional Neural Networks with Test-Time Augmentation
Automatic brain tumor segmentation plays an important role in diagnosis,
surgical planning and treatment assessment of brain tumors. Deep convolutional
neural networks (CNNs) have been widely used for this task. Due to the
relatively small data set for training, data augmentation at training time has
been commonly used for better performance of CNNs. Recent works also
demonstrated the usefulness of using augmentation at test time, in addition to
training time, for achieving more robust predictions. We investigate how
test-time augmentation can improve CNNs' performance for brain tumor
segmentation. We used different underpinning network structures and augmented
the image by 3D rotation, flipping, scaling and adding random noise at both
training and test time. Experiments with the BraTS 2018 training and
validation sets show that test-time augmentation helps improve brain tumor
segmentation accuracy and obtain uncertainty estimates of the segmentation
results.
Comment: 12 pages, 3 figures, MICCAI BrainLes 201
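The test-time augmentation loop can be sketched as below; for brevity only flips are shown (the abstract also uses 3D rotation, scaling, and noise), and `model` stands for any callable returning a voxel-wise score map. Averaging the back-transformed predictions gives the robust estimate, and their variance gives the uncertainty map:

```python
import numpy as np

def tta_predict(model, volume):
    """Flip-based test-time augmentation for a 3D volume (simplified sketch).

    Each augmented prediction is mapped back to the original orientation
    before aggregation, yielding a mean prediction and a variance map that
    serves as a per-voxel uncertainty estimate.
    """
    preds = []
    for axis in (None, 0, 1, 2):
        aug = volume if axis is None else np.flip(volume, axis)
        p = model(aug)
        preds.append(p if axis is None else np.flip(p, axis))  # undo the flip
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.var(axis=0)

# sanity check with an identity "model": mean equals the input, variance is 0
vol = np.arange(8.0).reshape(2, 2, 2)
mean_pred, uncertainty = tta_predict(lambda v: v, vol)
```

For a real segmentation network the variance map is non-zero precisely where the prediction is unstable under augmentation, which is what makes it usable as an uncertainty estimate.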
Multi-Centre, Multi-Vendor and Multi-Disease Cardiac Segmentation: The M&Ms Challenge
The emergence of deep learning has considerably advanced the state-of-the-art in cardiac magnetic resonance (CMR) segmentation. Many techniques have been proposed over the last few years, bringing the accuracy of automated segmentation close to human performance. However, these models have been all too often trained and validated using cardiac imaging samples from single clinical centres or homogeneous imaging protocols. This has prevented the development and validation of models that are generalizable across different clinical centres, imaging conditions or scanner vendors. To promote further research and scientific benchmarking in the field of generalizable deep learning for cardiac segmentation, this paper presents the results of the Multi-Centre, Multi-Vendor and Multi-Disease Cardiac Segmentation (M&Ms) Challenge, which was recently organized as part of the MICCAI 2020 Conference. A total of 14 teams submitted different solutions to the problem, combining various baseline models, data augmentation strategies, and domain adaptation techniques. The obtained results indicate the importance of intensity-driven data augmentation, as well as the need for further research to improve generalizability towards unseen scanner vendors or new imaging protocols. Furthermore, we present a new resource of 375 heterogeneous CMR datasets acquired by using four different scanner vendors in six hospitals and three different countries (Spain, Canada and Germany), which we provide as open-access for the community to enable future research in the field.